Results 1 - 15 of 15
1.
HardwareX; 13: e00381, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36483327

ABSTRACT

Environmental and water-quality monitoring are of utmost interest in a context where land-use changes, uncontrolled agricultural practices, human settlements, tourism and other activities affect a watershed and condition the usage of its surface waters. Such is the case of the Mar Menor lagoon in the southeast of Spain, where the EU H2020 SMARTLAGOON project is implementing an intelligent environmental infrastructure and modelling effort that will enable the construction of a digital twin of the lagoon. Environmental monitoring is expensive, and the number of sampling locations is typically limited by the budget. For this reason, we have developed a low-cost monitoring system that can be integrated into a small buoy and attached to fishing and recreational boats, allowing citizens to gather water-quality information (i.e., electrical conductivity and temperature) with their smartphones. The use of such devices fosters key stakeholder engagement and citizen-science activities that can enrich and ease the data-gathering process.
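The abstract omits implementation details; as a hedged illustration of the kind of on-node processing such a probe needs, the sketch below normalizes a raw conductivity reading to 25 °C using the standard linear temperature-compensation model (the function name and the 2%/°C coefficient are illustrative assumptions, not values taken from the paper).

```python
def compensate_ec(ec_raw, temp_c, alpha=0.02):
    """Normalize a raw electrical-conductivity reading to 25 degC
    using the standard linear temperature-compensation model."""
    return ec_raw / (1.0 + alpha * (temp_c - 25.0))

# Example: a raw reading of 52.3 mS/cm taken at 18.4 degC
print(round(compensate_ec(52.3, 18.4), 2))  # -> 60.25 mS/cm at 25 degC
```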

2.
Sci Rep; 11(1): 15173, 2021 Jul 26.
Article in English | MEDLINE | ID: mdl-34312455

ABSTRACT

We are witnessing the dramatic consequences of the COVID-19 pandemic, which, unfortunately, go beyond the impact on the health system. Until herd immunity is achieved with vaccines, the only available mechanisms for controlling the pandemic are quarantines, perimeter closures and social distancing, all aimed at reducing mobility. Governments apply these measures only for limited periods, since they involve the closure of economic activities such as tourism, cultural events or nightlife. The main criterion for establishing these measures and planning socioeconomic subsidies is the evolution of infections. However, the collapse of the health system and the unpredictability of human behavior, among other factors, make it difficult to predict this evolution in the short to medium term. This article evaluates different models for the early prediction of the evolution of the COVID-19 pandemic to create a decision-support system for policy-makers. We consider a wide range of models, including artificial neural networks such as LSTM and GRU, and statistically based models such as autoregressive (AR) and ARIMA models. Moreover, several consensus strategies that ensemble all models into one system are proposed to obtain better results in this uncertain environment. Finally, a multivariate model that includes mobility data provided by Google is proposed to better forecast trend changes in the 14-day cumulative incidence (CI). A real case study in Spain is evaluated, providing very accurate results for the prediction of the 14-day CI in scenarios with and without trend changes, reaching an R² of 0.93, an RMSE of 4.16 and an MAE of 1.08.
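The abstract does not specify the consensus strategies; a common choice, shown here as a hedged sketch, is inverse-error weighting, where each model's forecast is weighted by its validation accuracy (all names and numbers below are hypothetical, not results from the paper).

```python
import numpy as np

def consensus_forecast(predictions, errors):
    """Inverse-error weighted consensus: models with lower validation
    RMSE get a larger vote in the combined forecast."""
    w = 1.0 / np.asarray(errors, dtype=float)
    w /= w.sum()
    return float(np.asarray(predictions, dtype=float) @ w)

# Hypothetical 14-day CI forecasts from LSTM, GRU and ARIMA models
preds = [230.1, 244.8, 251.5]   # one forecast per model
rmses = [4.2, 5.0, 6.3]         # each model's validation RMSE
print(consensus_forecast(preds, rmses))
```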


Subjects
COVID-19/epidemiology; Artificial Intelligence; Forecasting; Humans; Incidence; Models, Statistical; Neural Networks, Computer; Spain/epidemiology
3.
Bioinformatics; 37(11): 1515-1520, 2021 Jul 12.
Article in English | MEDLINE | ID: mdl-31960899

ABSTRACT

MOTIVATION: Molecular docking methods are extensively used to predict the interaction between protein-ligand systems in terms of structure and binding affinity, through the optimization of a physics-based scoring function. However, the computational requirements of these simulations grow exponentially with (i) the global optimization procedure, (ii) the number and degrees of freedom of the molecular conformations generated and (iii) the mathematical complexity of the scoring function. RESULTS: In this work, we introduce a novel molecular docking method named METADOCK 2, which incorporates several novel features, such as (i) a ligand-dependent blind docking approach that exhaustively scans the whole protein surface to detect novel allosteric sites, (ii) an optimization method that enables the use of a wide range of metaheuristics and (iii) a heterogeneous implementation based on multicore CPUs and multiple graphics processing units. Two representative scoring functions implemented in METADOCK 2 are extensively evaluated in terms of computational performance and accuracy on several benchmarks (such as the well-known DUD) against AutoDock 4.2 and AutoDock Vina. The results place METADOCK 2 as an efficient and accurate docking methodology able to deal with complex systems whose computational demands are staggering, outperforming both AutoDock Vina and AutoDock 4. AVAILABILITY AND IMPLEMENTATION: https://Baldoimbernon@bitbucket.org/Baldoimbernon/metadock_2.git. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
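As a hedged sketch of how an optimization layer open to many metaheuristics might be structured (this is not METADOCK 2's actual engine), the skeleton below runs simulated annealing over user-supplied scoring and perturbation callbacks; swapping the acceptance rule and schedule would yield other metaheuristics.

```python
import math
import random

def anneal(score, random_pose, perturb, steps=10_000, t0=1.0):
    """Generic simulated-annealing skeleton of the kind a configurable
    metaheuristic layer could apply to a physics-based scoring function.
    score/random_pose/perturb are user-supplied callbacks."""
    pose = random_pose()
    cur = best = score(pose)
    best_pose = pose
    for k in range(steps):
        t = t0 * (1.0 - k / steps)            # linear cooling schedule
        cand = perturb(pose)
        s = score(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if s < cur or random.random() < math.exp((cur - s) / max(t, 1e-9)):
            pose, cur = cand, s
            if s < best:
                best, best_pose = s, cand
    return best_pose, best

# Toy usage: minimize a 1-D quadratic "scoring function"
print(anneal(lambda x: (x - 3.0) ** 2,
             lambda: random.uniform(-10.0, 10.0),
             lambda x: x + random.gauss(0.0, 0.5)))
```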


Subjects
Proteins; Ligands; Molecular Conformation; Molecular Docking Simulation
4.
Sensors (Basel); 20(24), 2020 Dec 12.
Article in English | MEDLINE | ID: mdl-33322717

ABSTRACT

Precision agriculture is a growing sector that improves traditional agricultural processes through the use of new technologies. In southeast Spain, farmers continuously fight against harsh conditions caused by the effects of climate change. Among these problems, the great variability of temperatures (up to 20 °C within the same day) stands out: it causes stone-fruit trees to flower prematurely, and low winter temperatures then freeze the blossom, destroying the crop. Farmers use anti-frost techniques to prevent crop loss, and the most widely used are those based on water irrigation, as they are cheaper than the alternatives. However, these techniques waste a great deal of water, which is a scarce resource, especially in this area. In this article, we propose a novel intelligent Internet of Things (IoT) monitoring system to optimize the use of water in these anti-frost techniques while minimizing crop loss. The intelligent component of the IoT system is based on a multivariate Long Short-Term Memory (LSTM) model designed to predict low temperatures. We compare the proposed multivariate model with its univariate counterpart to determine which predicts low temperatures more accurately. An accurate prediction of low temperatures translates into significant water savings, as the anti-frost techniques are not activated unless necessary. Our experimental results show that the proposed multivariate LSTM approach improves on the univariate version, obtaining an average quadratic error no greater than 0.65 °C and a coefficient of determination (R²) greater than 0.97. The proposed system has been deployed and is currently operating in a real environment, where it has achieved satisfactory performance.
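As a hedged sketch of a multivariate LSTM forecaster of the kind the abstract describes (window length, feature set and layer sizes are assumptions, not the paper's configuration), in Keras:

```python
import numpy as np
import tensorflow as tf

# Window length, feature set and layer sizes are illustrative
# assumptions, not the configuration reported in the paper.
WINDOW, FEATURES = 24, 4   # e.g. temperature, humidity, wind, radiation

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),          # predicted next-step temperature
])
model.compile(optimizer="adam", loss="mse")

# x: sensor windows, y: the following minimum temperature (synthetic here)
x = np.random.rand(128, WINDOW, FEATURES).astype("float32")
y = np.random.rand(128).astype("float32")
model.fit(x, y, epochs=2, verbose=0)
print(model.predict(x[:1], verbose=0))
```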

5.
Sensors (Basel); 20(21), 2020 Nov 06.
Article in English | MEDLINE | ID: mdl-33172017

ABSTRACT

The Internet of Things (IoT) is becoming a new socioeconomic revolution in which data and immediacy are the main ingredients. IoT generates large datasets on a daily basis, but they are currently considered "dark data", i.e., data generated but never analyzed. Efficient analysis of these data is mandatory to create the next generation of intelligent IoT applications that benefit society. Artificial Intelligence (AI) techniques are very well suited to identifying hidden patterns and correlations in this data deluge. In particular, clustering algorithms are of the utmost importance for performing exploratory data analysis to identify sets (a.k.a. clusters) of similar objects. Clustering algorithms are computationally heavy workloads and need to be executed on high-performance computing (HPC) clusters, especially to deal with large datasets. Execution on HPC infrastructures is an energy-hungry procedure with additional issues, such as high-latency communications or privacy. Edge computing, a paradigm that enables lightweight computation at the edge of the network, has recently been proposed to solve these issues. In this paper, we provide an in-depth analysis of emergent edge computing architectures that include low-power Graphics Processing Units (GPUs) to speed up these workloads. Our analysis includes performance and power-consumption figures for the latest Nvidia AGX Xavier, comparing the energy-performance ratio of these low-cost platforms with a high-performance cloud-based counterpart. Three different clustering algorithms (i.e., k-means, Fuzzy Minimals (FM) and Fuzzy C-Means (FCM)) are designed to be optimally executed on edge and cloud platforms, showing a speed-up factor of up to 11× for the GPU code compared to the sequential versions on the edge platforms, and energy savings of up to 150% between the edge computing and HPC platforms.
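For reference, a minimal NumPy k-means illustrates why clustering maps so well onto GPUs: the assignment step is independent per point. This is an illustrative sketch, not the paper's optimized implementation.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain NumPy k-means (Lloyd's algorithm). The point-to-center
    distance step is embarrassingly parallel, which is what makes the
    algorithm a natural fit for a GPU. Assumes no cluster empties out."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                  # nearest center per point
        new = np.stack([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

pts = np.random.rand(1000, 2)
centers, labels = kmeans(pts, k=3)
print(centers)
```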

6.
Article in English | MEDLINE | ID: mdl-32069834

ABSTRACT

The Mar Menor is a hypersaline coastal lagoon of high environmental value and a characteristic example of a highly anthropized hydro-ecosystem, located in the southeast of Spain. Unprecedented eutrophication crises in 2016 and 2019, with abrupt changes in the quality of its waters, caused great social alarm. Understanding and modeling the level of a eutrophication indicator, such as chlorophyll-a (Chl-a), benefits the management of this complex system. In this study, we investigate the potential of machine learning (ML) methods to predict the level of Chl-a. In particular, Multilayer Neural Networks (MLNNs) and Support Vector Regressions (SVRs) are evaluated on a dataset comprising up to nine different water-quality parameters. The most relevant input combinations were extracted using wrapper feature-selection methods, which simplified the structure of the model, resulting in a more accurate and efficient procedure. Although performance in the validation phase showed that the SVR models obtained better results than the MLNNs, the experimental results indicate that both ML algorithms provide satisfactory predictions of Chl-a concentration, reaching a cross-validated coefficient of determination (R²CV) of up to 0.7 for the best-fit models.
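A hedged sketch of the overall recipe, wrapper feature selection around an SVR, using scikit-learn (the kernel, hyperparameters and number of selected features are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector

# Kernel, C and the number of selected features are assumptions.
svr = SVR(kernel="rbf", C=10.0)
selector = SequentialFeatureSelector(svr, n_features_to_select=4,
                                     direction="forward", cv=5)
model = make_pipeline(StandardScaler(), selector, svr)

# X: nine water-quality parameters; y: Chl-a level (synthetic here)
X, y = np.random.rand(60, 9), np.random.rand(60)
model.fit(X, y)
# Which of the nine inputs the wrapper kept:
print(model.named_steps["sequentialfeatureselector"].get_support())
```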


Subjects
Algorithms; Ecosystem; Eutrophication; Machine Learning; Environmental Monitoring; Spain
7.
Sensors (Basel); 20(3), 2020 Feb 07.
Article in English | MEDLINE | ID: mdl-32046231

ABSTRACT

Wireless acoustic sensor networks are nowadays an essential tool for noise-pollution monitoring and management in cities. The increased computing capacity of the nodes that make up the network allows the addition of processing algorithms and artificial intelligence that provide more information about the sound sources and the environment, e.g., detecting sound events or calculating loudness. Several models for predicting sound pressure levels in cities are available, mainly for road, railway and air traffic noise. However, these models are mostly based on auxiliary data, e.g., vehicle flows or street geometry, and predict long-term equivalent levels. Short-term forecasting of sound levels could therefore be a helpful tool for urban planners and managers. In this work, a Long Short-Term Memory (LSTM) deep neural network is proposed to model the temporal behavior of sound levels at a given location, both sound pressure level and loudness level, in order to predict values in the near future. The proposed technique can be trained for, and integrated into, every node of a sensor network to provide novel functionalities, e.g., early warning against noise pollution and backup in case of node or network malfunction. To validate this approach, one-minute equivalent sound levels, captured in a two-month measurement campaign by a node of a deployed acoustic sensor network, were used to train it and to obtain different forecasting models. The developed LSTM models and autoregressive integrated moving average (ARIMA) models were assessed for predicting sound levels over several time horizons, from 1 to 60 min. Comparison of the results shows that the LSTM models outperform the statistics-based models. In general, the LSTM models achieve predictions with a mean square error of less than 4.3 dB for sound pressure level and less than 2 phons for loudness. Moreover, the goodness of fit of the LSTM models and their ability to follow the behavior pattern of the data are satisfactory.
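As a hedged sketch of the statistical baseline, the snippet below fits an ARIMA model to a synthetic minute-level series and forecasts the next 15 minutes; the (p, d, q) order and the series are assumptions, not the paper's setting or data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for one-minute equivalent sound levels (dB)
levels = 65 + 5 * np.sin(np.linspace(0, 20, 600)) + np.random.randn(600)

fit = ARIMA(levels, order=(2, 1, 2)).fit()   # assumed (p, d, q) order
print(fit.forecast(steps=15))                # one predicted dB value per minute
```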

8.
Sensors (Basel); 19(22), 2019 Nov 08.
Article in English | MEDLINE | ID: mdl-31717423

ABSTRACT

Road traffic pollution is one of the key factors affecting urban air quality. There is a consensus in the community that the efficient use of public transport is the most effective solution. In that sense, much effort has been made in the data-mining discipline to come up with solutions able to anticipate taxi demand in a city, which helps to optimize the trips made by such an important urban means of transport. However, most existing solutions in the literature define taxi-demand prediction as a regression problem based on historical taxi records. This entails serious limitations with respect to the data required to operate and the interpretability of the prediction outcome. In this paper, we introduce QUADRIVEN (QUalitative tAxi Demand pRediction based on tIme-Variant onlinE social Network data analysis), a novel approach to the taxi-demand prediction problem based on human-generated data widely available on online social networks. The prediction output is defined in terms of categorical labels, which yields a semantically enriched result. Finally, this proposal was tested with different models in a large urban area, showing quite promising results with an F1 score above 0.8.
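A hedged sketch of the qualitative formulation, demand prediction as classification over categorical labels evaluated with the F1 score; the features and labels are synthetic stand-ins for the social-network signals the paper uses, and the classifier is not necessarily one of its tested models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: activity features per zone/hour and demand labels
X = np.random.rand(500, 6)
y = np.random.choice(["low", "medium", "high"], size=500)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print(f1_score(yte, clf.predict(Xte), average="macro"))
```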

9.
Curr Drug Targets; 17(14): 1626-1648, 2016.
Article in English | MEDLINE | ID: mdl-26844561

ABSTRACT

The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed to predict the three-dimensional arrangement of the atoms of a protein from its sequence. However, the computational complexity of this problem makes it mandatory to search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame. In this review, we present past and current trends in protein-folding simulation from both perspectives: hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem, as well as the hardware platforms that have been used to run these kinds of soft-computing techniques.


Subjects
Computational Biology/instrumentation; Proteins/chemistry; Algorithms; Computational Biology/methods; Humans; Models, Molecular; Protein Conformation; Protein Folding; Software
11.
Biomed Res Int; 2014: 474219, 2014.
Article in English | MEDLINE | ID: mdl-25025055

ABSTRACT

Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and thus the use of high-performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC, as they push desktop computers to cluster-level performance. Many applications in this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing for scaling bioinformatics applications, as an alternative to owning large GPU-based local infrastructures. As a benchmark, we use a GPU-based drug-discovery application called BINDSURF, whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and viable HPC option for bioinformatics applications that need to process huge amounts of data and where response time is not a critical factor.


Subjects
Computational Biology/methods; Computer Graphics; Computing Methodologies; Drug Discovery/methods; Databases, Factual; Humans; Software
12.
J Chem Inf Model; 53(8): 2057-64, 2013 Aug 26.
Article in English | MEDLINE | ID: mdl-23862733

ABSTRACT

Conformational entropy calculation, usually performed by normal-mode analysis (NMA) or quasi-harmonic analysis (QHA), is extremely time-consuming. Here, instead of NMA or QHA, a solvent-accessible surface area (SASA) based model was employed to compute the conformational entropy, and a new fast GPU-based method called MURCIA (Molecular Unburied Rapid Calculation of Individual Areas) was implemented to accelerate the calculation of the SASA of each atom. MURCIA employs two different kernels to determine the neighbors of each atom. The first kernel (K1) uses brute force to compute the neighbors of each atom, while the second (K2) uses an advanced algorithm involving hardware interpolation via the GPU texture-memory unit for the same purpose. The two kernels yield very similar results, and each has its own advantages depending on the protein size: K1 performs better than K2 when the protein is small, and vice versa. The algorithm was extensively evaluated on four protein datasets and achieves good results for all of them. This GPU-accelerated version is ~600 times faster than the former sequential algorithm for proteins with up to 10⁵ atoms.
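A hedged sketch of the brute-force strategy behind kernel K1, written in NumPy rather than CUDA, with illustrative radii and cutoffs (not the paper's actual kernel):

```python
import numpy as np

def neighbor_counts(coords, radii, probe=1.4):
    """Brute-force neighbor search in the spirit of kernel K1: count,
    for every atom, the atoms whose solvent-expanded spheres overlap
    its own. O(n^2), but each row is independent, so it maps trivially
    onto GPU threads."""
    r = radii + probe                                   # expanded radii
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
    overlap = (d < r[:, None] + r[None, :]) & ~np.eye(len(coords), dtype=bool)
    return overlap.sum(axis=1)

coords = np.random.rand(100, 3) * 20.0   # toy coordinates (angstroms)
radii = np.full(100, 1.7)                # e.g. a carbon vdW radius
print(neighbor_counts(coords, radii)[:10])
```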


Subjects
Algorithms; Computer Graphics; Entropy; Protein Conformation; Proteins/chemistry; Proteins/metabolism; Reproducibility of Results; Time Factors
13.
PLoS One; 8(4): e61892, 2013.
Article in English | MEDLINE | ID: mdl-23658616

ABSTRACT

With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can execute thousands of lightweight threads simultaneously. In this study, we propose and implement a parallel GPU-based design of a popular method used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity non-invasively and in vivo. We parallelize the Bayesian inference framework for the ball-and-stick model, as implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we use the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude when comparing a single GPU with the corresponding sequential single-core CPU version. We also observe similar speed-up factors (up to 120×) when comparing a multi-GPU with a multi-CPU implementation.
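A hedged sketch of the sampler at the core of such a method: a minimal random-walk Metropolis step. The actual FSL ball-and-stick posterior is far more involved; everything here is illustrative.

```python
import numpy as np

def metropolis(log_post, theta0, steps=5000, scale=0.1, seed=0):
    """Minimal random-walk Metropolis sampler. In a GPU design, each
    voxel runs an independent chain like this one, which is what makes
    the inference massively parallel."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(steps):
        cand = theta + scale * rng.standard_normal(theta.shape)
        lp_c = log_post(cand)
        if np.log(rng.random()) < lp_c - lp:    # Metropolis accept/reject
            theta, lp = cand, lp_c
        samples.append(theta.copy())
    return np.array(samples)

# Example: sample a 2-D standard normal "posterior"
draws = metropolis(lambda t: -0.5 * float(t @ t), np.zeros(2))
print(draws.mean(axis=0))
```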


Subjects
Algorithms; Brain/ultrastructure; Diffusion Magnetic Resonance Imaging/statistics & numerical data; Image Interpretation, Computer-Assisted; Software; Brain/physiology; Computer Graphics/statistics & numerical data; Humans; Male; Markov Chains; Monte Carlo Method
14.
BMC Bioinformatics; 13 Suppl 14: S13, 2012.
Article in English | MEDLINE | ID: mdl-23095663

ABSTRACT

BACKGROUND: Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a single binding site for the target, usually derived from the interpretation of the protein crystal structure. However, it has been demonstrated that in many cases diverse ligands interact with unrelated parts of the target, and many VS methods do not take this relevant fact into account. RESULTS: We present BINDSURF, a novel VS methodology that scans the whole protein surface to find new hotspots where ligands might potentially interact, and which is implemented on latest-generation massively parallel GPU hardware, allowing fast processing of large ligand databases. CONCLUSIONS: BINDSURF is an efficient and fast blind methodology for the ligand-dependent determination of protein binding sites, which uses the massively parallel architecture of GPUs for fast pre-screening of large ligand databases. Its results can also guide the subsequent application of more detailed VS methods to specific binding sites of proteins, and its use can aid drug discovery, design and repurposing, and therefore considerably help clinical research.
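As a hedged sketch of the blind-scanning idea, the snippet below tiles the surface into cells and keeps the best pose per cell, so no site is assumed a priori; the names, cell size and toy scoring function are illustrative, not BINDSURF's actual procedure.

```python
import numpy as np

def blind_scan(surface_points, score_pose, cell=8.0):
    """Tile the protein surface into cells and keep the best-scoring
    pose per cell, so that no region is privileged a priori. Cells are
    independent, which is why blind scanning suits GPUs."""
    best = {}
    for p in surface_points:
        key = tuple((p // cell).astype(int))       # spatial hash of the cell
        s = score_pose(p)
        if key not in best or s < best[key][0]:
            best[key] = (s, p)
    return sorted(best.values(), key=lambda t: t[0])[:5]   # top-5 hotspots

pts = np.random.rand(2000, 3) * 40.0               # toy surface points
for s, p in blind_scan(pts, lambda p: float(np.sin(p).sum())):
    print(round(s, 3), p.round(1))
```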


Subjects
Models, Chemical; Proteins/chemistry; Software; Binding Sites; Databases, Factual; Drug Discovery; Drug Evaluation, Preclinical/instrumentation; Drug Evaluation, Preclinical/methods; Humans; Ligands; Protein Binding; Proteins/metabolism
15.
Brief Bioinform; 11(3): 313-22, 2010 May.
Article in English | MEDLINE | ID: mdl-20038568

ABSTRACT

P systems, or Membrane Systems, provide a high-level computational modelling framework that combines the structural and dynamic aspects of biological systems in a relevant and understandable way. They are inherently parallel and non-deterministic computing devices. In this article, we discuss the motivation, design principles and key aspects of the implementation of a simulator for the class of recognizer P systems with active membranes running on a graphics processing unit (GPU). We compare our parallel simulator for GPUs with the simulator developed for a single central processing unit (CPU), showing that GPUs are better suited than CPUs to simulate P systems due to their highly parallel nature.
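As a hedged sketch of the multiset rewriting at the heart of such a simulator (a real recognizer P system adds membrane structure, polarizations and non-deterministic rule choice, none of which is modeled here):

```python
from collections import Counter

def evolve(multiset, rules):
    """Toy, maximally parallel evolution step inside one membrane:
    each rule fires as many times as its left-hand side still fits in
    the multiset. This shows only the multiset rewriting core that a
    GPU simulator parallelizes across objects and membranes."""
    ms, out = Counter(multiset), Counter()
    for lhs, rhs in rules:
        need = Counter(lhs)
        times = min(ms[o] // n for o, n in need.items())
        for o, n in need.items():
            ms[o] -= n * times                   # consume reactants
        for o, n in Counter(rhs).items():
            out[o] += n * times                  # produce products
    return ms + out

# Rules a -> bb and bc -> c applied in one (maximally parallel) step
print(evolve("aabbc", [("a", "bb"), ("bc", "c")]))
```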


Subjects
Algorithms; Biomimetics/methods; Computer Simulation; Models, Biological; Programming Languages; Software; Biology/methods; Software Design; Systems Integration